Local Smoothness in Variance Reduced Optimization
Abstract
We propose a family of non-uniform sampling strategies that provably speed up a class of linearly convergent stochastic optimization algorithms, including Stochastic Variance Reduced Gradient (SVRG) and Stochastic Dual Coordinate Ascent (SDCA). For a large family of penalized empirical risk minimization problems, our methods exploit data-dependent local smoothness of the loss functions near the optimum while maintaining convergence guarantees. Our bounds are the first to quantify the advantage gained from local smoothness, which can be substantial for some problems. Empirically, we provide thorough numerical results to back up our theory. Additionally, we present algorithms that exploit local smoothness in more aggressive ways and perform even better in practice.
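The core mechanism is importance sampling inside a variance-reduced method: examples are drawn with probabilities tied to their smoothness constants, and stochastic gradients are reweighted so the update remains unbiased. Below is a minimal sketch, not the authors' implementation, of SVRG with non-uniform sampling for ridge-regularized least squares; it uses global per-example smoothness constants L_i = ||a_i||^2 + lam as a stand-in for the paper's data-dependent local smoothness estimates, and all names and parameter values are illustrative.

```python
# Minimal sketch: SVRG with non-uniform (importance) sampling for
# ridge-regularized least squares. The sampling distribution p_i ~ L_i
# is a stand-in for local-smoothness-based sampling.
import numpy as np

def svrg_nonuniform(A, b, lam=0.1, step=0.05, epochs=20, inner=None, seed=0):
    rng = np.random.default_rng(seed)
    n, d = A.shape
    inner = inner or n
    # Smoothness constant of example i's loss: L_i = ||a_i||^2 + lam
    L = np.sum(A * A, axis=1) + lam
    p = L / L.sum()                      # importance-sampling distribution
    x = np.zeros(d)

    def grad_i(w, i):
        # Gradient of 0.5*(a_i.w - b_i)^2 + 0.5*lam*||w||^2
        return (A[i] @ w - b[i]) * A[i] + lam * w

    def full_grad(w):
        return A.T @ (A @ w - b) / n + lam * w

    for _ in range(epochs):
        x_snap = x.copy()
        mu = full_grad(x_snap)           # snapshot full gradient
        for _ in range(inner):
            i = rng.choice(n, p=p)
            # Reweight by 1/(n*p_i) so the variance-reduced gradient
            # estimator stays unbiased under non-uniform sampling.
            g = (grad_i(x, i) - grad_i(x_snap, i)) / (n * p[i]) + mu
            x -= step * g
    return x
```

Sampling proportionally to L_i means smoother (small-L_i) examples are visited less often; the paper's local-smoothness estimates sharpen this distribution further near the optimum.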
Similar Resources
An Empirical Analysis of Local Search in Stochastic Optimization for Planner Strategy Selection
Optimization of expected values in a stochastic domain is common in real-world applications. However, it is often difficult to solve such optimization problems without significant knowledge about the surface defined by the stochastic function. In this paper we examine local search techniques for solving stochastic optimization. In particular, we analyze assumptions of smoothness upon which these...
Fast Stochastic Variance Reduced ADMM for Stochastic Composition Optimization
We consider the stochastic composition optimization problem proposed in [17], which has applications ranging from estimation to statistical and machine learning. We propose the first ADMM-based algorithm, named com-SVR-ADMM, and show that com-SVR-ADMM converges linearly for strongly convex and Lipschitz-smooth objectives, and has a convergence rate of O(log S / S), which improves upon the O(S^(-4/9)) rate...
Third-order Smoothness Helps: Even Faster Stochastic Optimization Algorithms for Finding Local Minima
We propose stochastic optimization algorithms that can find local minima faster than existing algorithms for nonconvex optimization problems, by exploiting third-order smoothness to escape non-degenerate saddle points more efficiently. More specifically, the proposed algorithm only needs Õ(ε^(-10/3)) stochastic gradient evaluations to converge to an approximate local minimum x, which satisfies...
Nonstationary Gaussian Process Regression Using Point Estimates of Local Smoothness
Gaussian processes using nonstationary covariance functions are a powerful tool for Bayesian regression with input-dependent smoothness. A common approach is to model the local smoothness by a latent process that is integrated over using Markov chain Monte Carlo approaches. In this paper, we show that a simple approximation that uses the estimated mean of the local smoothness yields good result...
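As a rough illustration of the point-estimate approach described above, the sketch below runs 1D nonstationary GP regression with the Gibbs kernel, plugging in a fixed length-scale function l(x) rather than integrating over a latent smoothness process with MCMC. The specific l(x), data, and noise level are assumptions for demonstration only, not the paper's model.

```python
# Minimal sketch: nonstationary GP regression with a plugged-in
# (point-estimated) input-dependent length-scale, via the Gibbs kernel.
import numpy as np

def gibbs_kernel(x1, x2, ls):
    """Gibbs nonstationary squared-exponential kernel for 1D inputs."""
    l1, l2 = ls(x1)[:, None], ls(x2)[None, :]
    sq = l1 ** 2 + l2 ** 2
    pref = np.sqrt(2.0 * l1 * l2 / sq)
    return pref * np.exp(-((x1[:, None] - x2[None, :]) ** 2) / sq)

def gp_predict(x_tr, y_tr, x_te, ls, noise=1e-2):
    # Standard GP posterior mean/variance with the length-scale map fixed
    K = gibbs_kernel(x_tr, x_tr, ls) + noise * np.eye(len(x_tr))
    Ks = gibbs_kernel(x_te, x_tr, ls)
    mean = Ks @ np.linalg.solve(K, y_tr)
    cov = gibbs_kernel(x_te, x_te, ls) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

# Hypothetical point estimate of local smoothness:
# short length-scale (wiggly) near x = 0, smoother far away.
ls_hat = lambda x: 0.1 + 0.5 * np.abs(x)

rng = np.random.default_rng(0)
x_tr = np.linspace(-2.0, 2.0, 40)
y_tr = np.sin(4.0 * x_tr) * np.exp(-np.abs(x_tr)) + 0.05 * rng.standard_normal(40)
mu, var = gp_predict(x_tr, y_tr, np.linspace(-2.0, 2.0, 200), ls_hat)
```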
Image Deblurring using Split Bregman Iterative Algorithm
This paper presents a new variational algorithm for image deblurring by characterizing the properties of image local smoothness and nonlocal self-similarity simultaneously. Specifically, the local smoothness is measured by a total variation method, enforcing the local smoothness of images, while the nonlocal self-similarity is measured by transforming the 3D array generated by grouping similar ...
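For context on the local-smoothness term, here is a minimal sketch of the split Bregman iteration for anisotropic TV denoising with periodic boundaries. The paper's method additionally couples a nonlocal self-similarity term and a blur operator, both omitted here, and all parameter values are illustrative.

```python
# Minimal sketch: split Bregman for anisotropic TV denoising,
# min_u |grad u|_1 + (mu/2)||u - f||^2, with periodic boundaries so the
# u-subproblem can be solved exactly by FFT.
import numpy as np

def split_bregman_tv(f, mu=10.0, lam=5.0, iters=50):
    ny, nx = f.shape
    # Eigenvalues of Dx^T Dx + Dy^T Dy (periodic) for the FFT solve
    wx = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(nx) / nx)
    wy = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(ny) / ny)
    denom = mu + lam * (wx[None, :] + wy[:, None])

    dx = np.zeros_like(f); dy = np.zeros_like(f)   # split variables d = grad u
    bx = np.zeros_like(f); by = np.zeros_like(f)   # Bregman variables
    u = f.copy()
    shrink = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    Dx = lambda v: np.roll(v, -1, axis=1) - v      # forward differences
    Dy = lambda v: np.roll(v, -1, axis=0) - v
    DxT = lambda v: np.roll(v, 1, axis=1) - v      # adjoints
    DyT = lambda v: np.roll(v, 1, axis=0) - v

    for _ in range(iters):
        # u-subproblem: (mu*I + lam*(DxT Dx + DyT Dy)) u = rhs, solved by FFT
        rhs = mu * f + lam * (DxT(dx - bx) + DyT(dy - by))
        u = np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))
        # d-subproblem: componentwise soft-thresholding (shrinkage)
        dx = shrink(Dx(u) + bx, 1.0 / lam)
        dy = shrink(Dy(u) + by, 1.0 / lam)
        # Bregman variable update
        bx += Dx(u) - dx
        by += Dy(u) - dy
    return u
```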
Publication date: 2015